
    Analysis and characterisation of botnet scan traffic

    Botnets constitute a major source of malicious activity over a network, and their early identification and detection is considered a top priority by security experts. The majority of botmasters rely heavily on a scan procedure in order to detect vulnerable hosts and establish their botnets via a command and control (C&C) server. In this paper we examine the statistical characteristics of the scan process invoked by the Mariposa and Zeus botnets and demonstrate the applicability of conditional entropy as a robust metric for profiling it, using real pre-captured operational data. Our analysis, conducted on real datasets, demonstrates that the distributional behaviour of conditional entropy for Mariposa- and Zeus-related scan flows differs significantly from that of flows manifested by the commonly used NMAP scans. In contrast with the Stealth and Connect NMAP scans typically used by attackers, we show that consecutive scanning flows initiated by the C&C servers of the examined botnets exhibit a high dependency between themselves with respect to their conditional entropy. Thus, we argue that the observation of such scan flows under our proposed scheme can sufficiently aid network security experts towards the adequate profiling and early identification of botnet activity.
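    As a rough illustration of the metric involved, the sketch below computes the conditional entropy H(Y|X) over pairs of consecutive flow observations. The feature choice (destination port) and the pairing of consecutive flows are assumptions made for the example, not the paper's exact profiling scheme; a low value would indicate the strong dependency between consecutive scan flows described above.

```python
import math
from collections import Counter

def conditional_entropy(pairs):
    """Compute H(Y|X) in bits for a sequence of (x, y) observations.

    Illustrative sketch: x and y could be features drawn from
    consecutive scan flows (e.g. destination port, packet size).
    """
    joint = Counter(pairs)                       # counts of (x, y)
    marginal_x = Counter(x for x, _ in pairs)    # counts of x
    n = len(pairs)

    h = 0.0
    for (x, y), c_xy in joint.items():
        p_xy = c_xy / n                          # joint probability P(x, y)
        p_y_given_x = c_xy / marginal_x[x]       # conditional probability P(y | x)
        h -= p_xy * math.log2(p_y_given_x)
    return h

# Hypothetical usage: pair a feature of flow i with the same feature of flow i+1
ports = [53, 53, 80, 80, 80, 443, 53, 80]        # e.g. destination port per scan flow
pairs = list(zip(ports[:-1], ports[1:]))
print(f"H(next | current) = {conditional_entropy(pairs):.3f} bits")
```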

    Providing producer mobility support in NDN through proactive data replication

    Named Data Networking (NDN) is a novel architecture expected to overcome limitations of the current Internet. User mobility is one of the most relevant limitations to be addressed. NDN supports consumer mobility by design but fails to offer the same level of support for producer mobility. Existing approaches to extend NDN are host-centric, which conflicts with NDN principles, and provide limited support for producer mobility. This paper proposes a content-centric strategy that replicates and pushes objects proactively and, unlike previous approaches, takes full advantage of NDN routing and caching features. We compare the proposed strategy with default NDN mechanisms regarding content availability, consumer performance, and network overhead. The evaluation results indicate that our strategy can increase the hit rate of objects by at least 46% and reduce their retrieval time by over 60%, while not adding significant overhead.

    Modelling video rate evolution in adaptive bitrate selection

    Adaptive bitrate selection adjusts the quality of HTTP streaming video to a changing context. A number of different schemes have been proposed that use buffer state in the selection of the appropriate video rate. However, models describing the relationship between video quality levels and buffer occupancy are mostly based on heuristics, which often results in unstable and/or suboptimal quality. In this paper, we present a QoE-aware video rate evolution model based on buffer state changes. The scheme is evaluated within a real-world Internet environment, where it is shown to improve the stability of the video rate. A gain of up to 27% in average video rate can be achieved compared to the baseline ABR, and the average throughput utilisation at steady state reaches 100% in some of the investigated scenarios.
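    The abstract does not spell out the model itself, but the kind of buffer-occupancy-to-rate mapping that such schemes refine can be sketched as follows. The reservoir/cushion thresholds, the bitrate ladder and the linear mapping are assumptions made for illustration, not the proposed QoE-aware model.

```python
def select_bitrate(buffer_s, bitrates_kbps, reservoir_s=5.0, cushion_s=20.0):
    """Map buffer occupancy (seconds) to a video bitrate.

    Illustrative buffer-based ABR sketch: below the reservoir pick the
    lowest rate, above the cushion pick the highest, and interpolate
    linearly in between.
    """
    rates = sorted(bitrates_kbps)
    if buffer_s <= reservoir_s:
        return rates[0]
    if buffer_s >= cushion_s:
        return rates[-1]
    # Linear mapping of buffer occupancy onto the available rate range
    frac = (buffer_s - reservoir_s) / (cushion_s - reservoir_s)
    target = rates[0] + frac * (rates[-1] - rates[0])
    # Choose the highest rate not exceeding the target to limit rebuffering risk
    return max(r for r in rates if r <= target)

ladder = [350, 600, 1000, 2000, 3000]   # hypothetical bitrate ladder (kbps)
for buf in (2, 8, 15, 25):
    print(f"{buf:>2} s of buffer -> {select_bitrate(buf, ladder)} kbps")
```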

    A multi-level resilience framework for unified networked environments

    Networked infrastructures underpin most social and economic interactions nowadays and have become an integral part of the critical infrastructure. Thus, it is crucial that heterogeneous networked environments provide adequate resilience in order to satisfy the quality requirements of the user. Achieving this requires a coordinated approach to confronting potential challenges, which can manifest themselves under different circumstances in the various infrastructure components. The objective of this paper is to present a multi-level resilience approach that goes beyond traditional monolithic resilience schemes, which focus mainly on a single infrastructure component. The proposed framework considers four main aspects, i.e. users, application, network and system; the latter three are part of the technical infrastructure, while the former profiles the service user. Under two selected scenarios, this paper illustrates how an integrated approach that coordinates knowledge from the different infrastructure elements allows a more effective detection of challenges and facilitates the use of autonomic principles during remediation.

    Multi-level resilience in networked environments: concepts and principles

    Resilience is an essential property for critical networked environments such as utility networks (e.g. gas, water and electricity grids), industrial control systems, and communication networks. Due to the complexity of such networked environments, achieving resilience is multi-dimensional, since it involves a range of factors such as redundancy and connectivity of different system components as well as availability, security, dependability and fault tolerance. Hence, it is important to address resilience within a unified framework that considers such factors and further enables the practical composition of resilience mechanisms. In this paper we first introduce the concepts and principles of Multi-Level Resilience (MLR) and then demonstrate its applicability in a particular cloud-based scenario.

    Resilience support in software-defined networking: a survey

    Software-defined networking (SDN) is an architecture for computer networking that provides a clear separation between network control functions and forwarding operations. The abstractions supported by this architecture are intended to simplify the implementation of several tasks that are critical to network operation, such as routing and network management. Computer networks have an increasingly important societal role, requiring them to be resilient to a range of challenges. Previously, research into network resilience has focused on the mitigation of several types of challenges, such as natural disasters and attacks. Capitalizing on the benefits of SDN, including increased programmability and a clearer separation of concerns, significant attention has recently been devoted to the development of resilience mechanisms that use software-defined networking approaches. In this article, we present a survey that provides a structured overview of the resilience support that currently exists in this important area. We categorize the most recent research on this topic with respect to a number of resilience disciplines. Additionally, we discuss the lessons learned from this investigation, highlight the main challenges faced by SDNs moving forward, and outline the research trends in terms of solutions to mitigate these challenges.

    Tool support for the evaluation of anomaly traffic classification for network resilience

    Resilience is the ability of a network to maintain an acceptable level of operation in the face of anomalies, such as malicious attacks, operational overload or misconfigurations. Techniques for anomaly traffic classification are often used to characterize suspicious network traffic, thus supporting anomaly detection schemes in network resilience strategies. In this paper, we extend the PReSET toolset to allow the investigation, comparison and analysis of algorithms for anomaly traffic classification based on machine learning. PReSET was designed to allow the simulation-based evaluation of resilience strategies, thus enabling the comparison of optimal configurations and policies for combating different types of attacks (e.g., DDoS attacks, worms) and other anomalies. In such resilience strategies, policies written in the Ponder2 language can be used to activate or reconfigure traffic classification modules and other mechanisms (e.g., traffic shaping), depending on the results monitored in the simulation environment. Our results show that PReSET can be a valuable tool for network operators to evaluate anomaly traffic classification techniques in terms of standard performance metrics.
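    Independently of PReSET and Ponder2, the classification step that such strategies evaluate can be illustrated with a minimal supervised-learning sketch: train a classifier on labelled flow features and report the standard performance metrics. The feature set and the synthetic data below are assumptions made for the example only.

```python
# Minimal sketch of ML-based anomaly traffic classification (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical flow features: [packets/s, bytes/packet, flow duration (s)]
normal = rng.normal(loc=[50, 800, 2.0], scale=[10, 100, 0.5], size=(500, 3))
attack = rng.normal(loc=[900, 60, 0.2], scale=[100, 10, 0.05], size=(500, 3))

X = np.vstack([normal, attack])
y = np.array([0] * len(normal) + [1] * len(attack))   # 0 = normal, 1 = anomalous

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Standard performance metrics (precision, recall, F-score) per class
print(classification_report(y_test, clf.predict(X_test), target_names=["normal", "anomalous"]))
```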

    Anomaly detection in the cloud using data density

    Cloud computing is now extremely popular because of its use of elastic resources to provide optimized, cost-effective and on-demand services. However, clouds may be subject to challenges arising from cyber attacks, including DoS and malware, as well as from sheer complexity problems that manifest themselves as anomalies. Anomaly detection techniques are used increasingly to improve the resilience of cloud environments and indirectly reduce the cost of recovery from outages. Most anomaly detection techniques are computationally expensive in a cloud context, and often require problem-specific parameters to be defined in advance, impairing their use in real-time detection. Aiming to overcome these problems, we propose a technique for anomaly detection based on data density. The density is computed recursively, so the technique is memory-less and unsupervised, and therefore suitable for real-time cloud environments. We demonstrate the efficacy of the proposed technique using an emulated dataset from a testbed, under various attack types and intensities, and in the face of VM migration. The obtained results, which include precision, recall, accuracy, F-score and G-score, show that network-level attacks are detectable with high accuracy.
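    A minimal sketch of a recursive, memory-less density estimate of this kind is shown below. It follows a standard recursive density estimation (RDE-style) formulation; the exact formulation, features and anomaly threshold used in the paper may differ, so treat the threshold and data here as assumptions.

```python
import numpy as np

class RecursiveDensity:
    """Memory-less, unsupervised density estimate updated one sample at a time."""

    def __init__(self, dim):
        self.k = 0
        self.mean = np.zeros(dim)   # running mean of samples
        self.scalar = 0.0           # running mean of squared norms

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.k += 1
        self.mean += (x - self.mean) / self.k
        self.scalar += (float(x @ x) - self.scalar) / self.k
        # Density is inversely related to the sample's distance from the data cloud
        dist_sq = float((x - self.mean) @ (x - self.mean))
        spread = self.scalar - float(self.mean @ self.mean)
        return 1.0 / (1.0 + dist_sq + max(spread, 0.0))

# Hypothetical usage: flag samples whose density drops below a threshold
detector = RecursiveDensity(dim=2)
stream = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (8.0, 9.0)]   # last point is anomalous
for sample in stream:
    d = detector.update(sample)
    print(sample, f"density={d:.3f}", "ANOMALY" if d < 0.3 else "ok")
```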

    Appliance-level Short-term Load Forecasting using Deep Neural Networks

    The recently employed demand-response (DR) model, enabled by the transformation of the traditional power grid to the SmartGrid (SG), allows energy providers to have a clearer understanding of the energy utilisation of each individual household within their administrative domain. Nonetheless, the rapid growth of IoT-based domestic appliances within each household, in conjunction with the varying and hard-to-predict customer-specific energy requirements, is regarded as a challenge with respect to accurately profiling and forecasting the day-to-day or week-to-week appliance-level power consumption demand. Such a forecast is considered essential in order to compose a granular and accurate aggregate-level power consumption forecast for a given household, identify faulty appliances, and assess potential security and resilience issues from both an end-user and an energy-provider perspective. Therefore, in this paper we investigate techniques that enable this and propose the applicability of Deep Neural Networks (DNNs) for short-term appliance-level power profiling and forecasting. We demonstrate their superiority over the heavily used Support Vector Machines (SVMs) in terms of prediction accuracy and computational performance, with experiments conducted over a real appliance-level dataset gathered in four residential households.
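    As an illustration of the general forecasting setup, the sketch below trains a small feed-forward network on sliding windows of a synthetic appliance power trace to predict the next reading. The data, window size and architecture are assumptions made for the example, not the networks or datasets evaluated in the paper.

```python
# Short-term appliance-level load forecasting sketch (illustrative only).
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
# Hypothetical appliance power trace (W): a daily cycle plus noise
t = np.arange(2000)
series = 40 + 30 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 3, size=t.size)

window = 24                                   # look-back window (number of readings)
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

X_t = torch.tensor(X, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(1)

model = nn.Sequential(nn.Linear(window, 64), nn.ReLU(),
                      nn.Linear(64, 32), nn.ReLU(),
                      nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):                      # short full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = model(X_t[-1:]).item()
print(f"next-step forecast: {pred:.1f} W (actual {y[-1]:.1f} W)")
```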

    Similitude: decentralised adaptation in large-scale P2P recommenders

    Decentralised recommenders have been proposed to deliver privacy-preserving, personalised and highly scalable on-line recommendations. Current implementations tend, however, to rely on a hard-wired similarity metric that cannot adapt. This constitutes a strong limitation in the face of evolving needs. In this paper, we propose a framework to develop dynamically adaptive decentralised recommendation systems. Our proposal supports a decentralised form of adaptation, in which individual nodes can independently select and update their own recommendation algorithm, while still collectively contributing to the overall system’s mission.
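    One way such per-node adaptation could look is sketched below: a node scores a set of candidate similarity metrics against its own held-out interactions and keeps the one that predicts them best. The candidate metrics and the selection rule are assumptions for illustration, not the Similitude protocol itself.

```python
# Illustrative per-node similarity-metric selection (not the actual protocol).

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def overlap(a, b):
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

CANDIDATE_METRICS = {"jaccard": jaccard, "overlap": overlap}

def pick_metric(my_items, neighbour_profiles, held_out):
    """Choose the metric whose top-ranked neighbour best recalls held-out items."""
    best_name, best_score = None, -1.0
    for name, metric in CANDIDATE_METRICS.items():
        top = max(neighbour_profiles, key=lambda p: metric(my_items, p))
        score = len(top & held_out) / max(len(held_out), 1)   # recall on held-out items
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Hypothetical local data for one node
me = {"a", "b", "c"}
held_out = {"d"}
neighbours = [{"a", "b", "d"}, {"a", "x", "y", "z"}, {"c", "d", "e", "f", "g"}]
print(pick_metric(me, neighbours, held_out))
```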